11 research outputs found

    Placing Conditional Disclosure of Secrets in the Communication Complexity Universe

    In the conditional disclosure of secrets (CDS) problem (Gertner et al., J. Comput. Syst. Sci., 2000), Alice and Bob, who hold n-bit inputs x and y respectively, wish to release a common secret z to Carol (who knows both x and y) if and only if the input (x,y) satisfies some predefined predicate f. Alice and Bob are allowed to send a single message to Carol which may depend on their inputs and some shared randomness, and the goal is to minimize the communication complexity while providing information-theoretic security. Despite the growing interest in this model, very few lower bounds are known. In this paper, we relate the CDS complexity of a predicate f to its communication complexity under various communication games. For several basic predicates our results yield tight, or almost tight, lower bounds of Omega(n) or Omega(n^{1-epsilon}), providing an exponential improvement over previous logarithmic lower bounds. We also define new communication complexity classes that correspond to different variants of the CDS model and study the relations between them and their complements. Notably, we show that allowing for imperfect correctness can significantly reduce communication - a seemingly new phenomenon in the context of information-theoretic cryptography. Finally, our results show that proving explicit super-logarithmic lower bounds for imperfect CDS protocols is a necessary step towards proving explicit lower bounds against the class AM, or even AM ∩ coAM - a well known open problem in the theory of communication complexity. Thus imperfect CDS forms a new minimal class which is placed just beyond the boundaries of the "civilized" part of the communication complexity world for which explicit lower bounds are known.
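
    The model is easiest to see on a concrete predicate. Below is a minimal sketch (a textbook linear CDS protocol for the equality predicate, not a construction from the paper) in which Alice and Bob share random field elements a and b, each sends a single field element, and Carol recovers z exactly when x = y; when x ≠ y, the secret stays perfectly hidden because a(y - x) is uniform over the field.

        # Illustrative sketch of a textbook CDS protocol for the equality predicate
        # (standard construction, not taken from the paper). Inputs x, y and the
        # secret z live in the prime field Z_p; Alice and Bob share randomness (a, b).
        import random

        P = 2**61 - 1  # a prime modulus; the choice is illustrative

        def cds_equality(x, y, z):
            a = random.randrange(P)             # shared randomness
            b = random.randrange(P)
            msg_alice = (a * x + b) % P         # Alice's single message
            msg_bob = (a * y + b + z) % P       # Bob's single message
            # Carol (who knows x and y) subtracts the messages; if x == y the
            # masks cancel and she recovers z, otherwise z is masked by a*(y - x),
            # which is uniform over the field and hides z perfectly.
            return (msg_bob - msg_alice) % P

        z = 42
        assert cds_equality(7, 7, z) == z       # equal inputs: Carol learns z
        print(cds_equality(7, 8, z))            # unequal inputs: a uniform field element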

    Collision-Resistance from Multi-Collision-Resistance

    Collision-resistant hash functions (CRH) are a fundamental and ubiquitous cryptographic primitive. Several recent works have studied a relaxation of CRH called t-way multi-collision-resistant hash functions (t-MCRH). These are families of functions for which it is computationally hard to find a t-way collision, even though such collisions are abundant (and even (t-1)-way collisions may be easy to find). The case of t=2 corresponds to standard CRH, but it is natural to study t-MCRH for larger values of t. Multi-collision-resistance seems to be a qualitatively weaker property than standard collision-resistance. Nevertheless, in this work we show a non-blackbox transformation of any moderately shrinking t-MCRH, for t in {2,4}, into an (infinitely often secure) CRH. This transformation is non-constructive - we can prove the existence of a CRH but cannot explicitly point out a construction. Our result partially extends to larger values of t. In particular, we show that for suitable values of t > t', we can transform a t-MCRH into a t'-MCRH, at the cost of reducing the shrinkage of the resulting hash function family and settling for infinitely often security. This result utilizes the list-decodability properties of Reed-Solomon codes.
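
    To make the t-MCRH notion concrete, the sketch below (illustrative, not from the paper) spells out what a t-way collision is: t distinct inputs mapping to the same output. For a shrinking function such collisions necessarily exist by the pigeonhole principle; the security requirement is only that they are hard to find.

        # Illustrative check for a t-way collision: t distinct inputs with the same
        # hash value. A t-MCRH only requires that such tuples are hard to *find*,
        # even though a shrinking function has many of them by pigeonhole.
        from collections import defaultdict

        def is_t_way_collision(h, inputs, t):
            """True if `inputs` contains t distinct values that h maps to one output."""
            buckets = defaultdict(set)
            for x in set(inputs):
                buckets[h(x)].add(x)
            return any(len(bucket) >= t for bucket in buckets.values())

        # Toy (totally insecure) hash used only to exercise the definition.
        toy_hash = lambda x: x % 16
        print(is_t_way_collision(toy_hash, [1, 17, 33, 49, 5], t=4))  # True: 1, 17, 33, 49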

    Cryptography from Information Loss

    © Marshall Ball, Elette Boyle, Akshay Degwekar, Apoorvaa Deshpande, Alon Rosen, Vinod Vaikuntanathan. Reductions between problems, the mainstay of theoretical computer science, efficiently map an instance of one problem to an instance of another in such a way that solving the latter allows solving the former. The subject of this work is "lossy" reductions, where the reduction loses some information about the input instance. We show that such reductions, when they exist, have interesting and powerful consequences for lifting hardness into "useful" hardness, namely cryptography. Our first, conceptual, contribution is a definition of lossy reductions in the language of mutual information. Roughly speaking, our definition says that a reduction C is t-lossy if, for any distribution X over its inputs, the mutual information I(X; C(X)) ≤ t. Our treatment generalizes a variety of seemingly related but distinct notions such as worst-case to average-case reductions, randomized encodings (Ishai and Kushilevitz, FOCS 2000), homomorphic computations (Gentry, STOC 2009), and instance compression (Harnik and Naor, FOCS 2006). We then proceed to show several consequences of lossy reductions: 1. We say that a language L has an f-reduction to a language L' for a Boolean function f if there is a (randomized) polynomial-time algorithm C that takes an m-tuple of strings X = (x_1, ..., x_m), with each x_i ∈ {0,1}^n, and outputs a string z such that, with high probability, L'(z) = f(L(x_1), L(x_2), ..., L(x_m)). Suppose a language L has an f-reduction C to L' that is t-lossy. Our first result is that one-way functions exist if L is worst-case hard and one of the following conditions holds: (i) f is the OR function, t ≤ m/100, and L' is the same as L; (ii) f is the Majority function and t ≤ m/100; (iii) f is the OR function, t ≤ O(m log n), and the reduction has no error. This improves on the implications that follow from combining (Drucker, FOCS 2012) with (Ostrovsky and Wigderson, ISTCS 1993), which result in auxiliary-input one-way functions. 2. Our second result is about the stronger notion of t-compressing f-reductions - reductions that only output t bits. We show that if there is an average-case hard language L that has a t-compressing Majority reduction to some language for t = m/100, then there exist collision-resistant hash functions. This improves on the result of (Harnik and Naor, FOCS 2006), whose starting point is a cryptographic primitive (namely, one-way functions) rather than average-case hardness, and whose assumption is a compressing OR-reduction of SAT (which is now known to be false unless the polynomial hierarchy collapses). Along the way, we define a non-standard one-sided notion of average-case hardness, which is the notion of hardness used in the second result above, that may be of independent interest.
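
    As a toy illustration of the mutual-information definition (an illustrative sketch, not a reduction from the paper), the map below outputs only the parity of its input; for uniform X it satisfies I(X; C(X)) = 1 bit, so it is 1-lossy in the sense above.

        # Toy illustration (not from the paper) of the t-lossy definition
        # I(X; C(X)) <= t: for X uniform over {0,1}^n, a map that outputs only
        # the parity of x reveals exactly one bit of information, so it is 1-lossy.
        import math
        from collections import Counter
        from itertools import product

        n = 4
        inputs = list(product([0, 1], repeat=n))   # X uniform over {0,1}^n

        def C(x):                                  # toy deterministic map standing in for a reduction
            return sum(x) % 2                      # outputs only the parity of x

        def mutual_information(inputs, C):
            # For deterministic C and uniform X, I(X; C(X)) = H(C(X)).
            counts = Counter(C(x) for x in inputs)
            total = len(inputs)
            return -sum((c / total) * math.log2(c / total) for c in counts.values())

        print(mutual_information(inputs, C))       # 1.0 bit, so this map is 1-lossy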

    Fine-grained Cryptography

    Fine-grained cryptographic primitives are ones that are secure against adversaries with a-priori bounded polynomial resources (time, space or parallel-time), where the honest algorithms use fewer resources than the adversaries they are designed to fool. Such primitives were previously studied in the context of time-bounded adversaries (Merkle, CACM 1978), space-bounded adversaries (Cachin and Maurer, CRYPTO 1997) and parallel-time-bounded adversaries (Håstad, IPL 1987). Our goal is to show unconditional security of these constructions when possible, or to base security on widely believed separations of worst-case complexity classes. We show: NC^1-cryptography: Under the assumption that NC^1 ≠ ⊕L/poly, we construct one-way functions, pseudo-random generators (with sub-linear stretch), collision-resistant hash functions and, most importantly, public-key encryption schemes, all computable in NC^1 and secure against all NC^1 circuits. Our results rely heavily on the notion of randomized encodings pioneered by Applebaum, Ishai and Kushilevitz, and crucially make non-black-box use of randomized encodings for logspace classes. AC^0-cryptography: We construct (unconditionally secure) pseudo-random generators with arbitrary polynomial stretch, weak pseudo-random functions, secret-key encryption and, perhaps most interestingly, collision-resistant hash functions, computable in AC^0 and secure against all AC^0 circuits. Previously, one-way permutations and pseudo-random generators (with linear stretch) computable in AC^0 and secure against AC^0 circuits were known from the works of Håstad and Braverman.

    The Planted k-SUM Problem: Algorithms, Lower Bounds, Hardness Amplification, and Cryptography

    In the average-case k-SUM problem, given r integers chosen uniformly at random from {0, ..., M-1}, the objective is to find a set of k numbers that sum to 0 modulo M (this set is called a solution). In the related k-XOR problem, given r uniformly random Boolean vectors of length log M, the objective is to find a set of k of them whose bitwise-XOR is the all-zero vector. Both of these problems have widespread applications in the study of fine-grained complexity and cryptanalysis. The feasibility and complexity of these problems depend on the relative values of k, r, and M. The dense regime of M ≤ r^k, where solutions exist with high probability, is quite well understood, and we have several non-trivial algorithms and hardness conjectures here. Much less is known about the sparse regime of M ≫ r^k, where solutions are unlikely to exist. The best answers we have for many fundamental questions here are limited to whatever carries over from the dense or worst-case settings. We study the planted k-SUM and k-XOR problems in the sparse regime. In these problems, a random solution is planted in a randomly generated instance and has to be recovered. As M increases past r^k, these planted solutions tend to be the only solutions with increasing probability, potentially becoming easier to find. We show several results about the complexity and applications of these problems. Conditional Lower Bounds. Assuming established conjectures about the hardness of average-case (non-planted) k-SUM when M = r^k, we show non-trivial lower bounds on the running time of algorithms for planted k-SUM when r^k ≤ M ≤ r^{2k}. We show the same for k-XOR as well. Search-to-Decision Reduction. For any M > r^k, suppose there is an algorithm running in time T that can distinguish between a random k-SUM instance and a random instance with a planted solution, with success probability (1 - o(1)). Then, for the same M, there is an algorithm running in time Õ(T) that solves planted k-SUM with constant probability. The same holds for k-XOR as well. Hardness Amplification. For any M ≥ r^k, if an algorithm running in time T solves planted k-XOR with success probability Ω(1/polylog(r)), then there is an algorithm running in time Õ(T) that solves it with probability (1 - o(1)). We show this by constructing a rapidly mixing random walk over k-XOR instances that preserves the planted solution. Cryptography. For some M ≤ 2^{polylog(r)}, the hardness of the k-XOR problem can be used to construct public-key encryption (PKE), assuming that the Learning Parity with Noise (LPN) problem with constant noise rate is hard for 2^{n^{0.01}}-time algorithms. Previous constructions of PKE from LPN needed either a noise rate of O(1/√n) or hardness for 2^{n^{0.5}}-time algorithms. Algorithms. For any M ≥ 2^{r^2}, there is a constant c (independent of k) and an algorithm running in time r^c that, for any k, solves planted k-SUM with success probability Ω(1/8^k). We get this by showing an average-case reduction from planted k-SUM to the Subset Sum problem. For r^k ≤ M ≪ 2^{r^2}, the best known algorithms are still the worst-case k-SUM algorithms running in time r^{⌈k/2⌉ - o(1)}.
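
    For concreteness, the sketch below (illustrative; parameter choices and helper names are hypothetical, not from the paper) generates a planted k-XOR instance: r random bit-vectors of length log M in which a hidden set of k positions is adjusted so that their bitwise-XOR is the all-zero vector, and the solver's task is to recover those positions.

        # Illustrative generator for a planted k-XOR instance (toy parameters).
        # r random bit-vectors of length log2(M); a hidden set of k of them is
        # forced to XOR to zero, and the planted set must be recovered.
        import random
        from functools import reduce

        def planted_k_xor_instance(r, k, log_m, seed=None):
            rng = random.Random(seed)
            vecs = [rng.getrandbits(log_m) for _ in range(r)]
            planted = rng.sample(range(r), k)
            # Set the last planted vector to the XOR of the other planted vectors,
            # so the XOR over the whole planted set is the all-zero vector.
            vecs[planted[-1]] = reduce(lambda a, b: a ^ b,
                                       (vecs[i] for i in planted[:-1]), 0)
            return vecs, sorted(planted)

        vecs, solution = planted_k_xor_instance(r=32, k=4, log_m=64, seed=1)
        assert reduce(lambda a, b: a ^ b, (vecs[i] for i in solution)) == 0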

    Proofs of Work from Worst-Case Assumptions

    We give Proofs of Work (PoWs) whose hardness is based on well-studied worst-case assumptions from fine-grained complexity theory. This extends the work of (Ball et al., STOC '17), which presents PoWs that are based on the Orthogonal Vectors, 3SUM, and All-Pairs Shortest Path problems. These, however, were presented as a 'proof of concept' of provably secure PoWs and did not fully meet the requirements of a conventional PoW: namely, it was not shown that multiple proofs could not be generated faster than generating each individually. We use the considerable algebraic structure of these PoWs to prove that this non-amortizability of multiple proofs does in fact hold, and further show that the PoWs' structure can be exploited in ways previous heuristic PoWs could not. This creates full PoWs that are provably hard from worst-case assumptions (previously, PoWs were either only based on heuristic assumptions or on much stronger cryptographic assumptions (Bitansky et al., ITCS '16)) while still retaining significant structure to enable extra properties of our PoWs. Namely, we show that the PoWs of (Ball et al., STOC '17) can be modified to have much faster verification time, can be proved in zero knowledge, and more. Finally, as our PoWs are based on evaluating low-degree polynomials originating from average-case fine-grained complexity, we prove an average-case direct sum theorem for the problem of evaluating these polynomials, which may be of independent interest. For our context, this implies the required non-amortizability of our PoWs.

    A study of efficient secret sharing

    Thesis: S.M., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2015. Cataloged from PDF version of thesis. Includes bibliographical references (pages 49-52). We show a general connection between various types of statistical zero-knowledge (SZK) proof systems and (unconditionally secure) secret sharing schemes. Viewed through the SZK lens, we obtain several new results on secret sharing: Characterizations: We obtain an almost-characterization of access structures for which there are secret-sharing schemes with an efficient sharing algorithm (but not necessarily efficient reconstruction). In particular, we show that for every language L ∈ SZKL (the class of languages that have statistical zero-knowledge proofs with log-space verifiers and simulators), a (monotonized) access structure associated with L has such a secret-sharing scheme. Conversely, we show that such secret-sharing schemes can only exist for languages in SZK. Constructions: We show new constructions of secret-sharing schemes with efficient sharing and reconstruction for access structures that are in P, but are not known to be in NC, namely Bounded-Degree Graph Isomorphism and constant-dimensional lattice problems. In particular, this gives us the first combinatorial access structure that is conjectured to be outside NC but has an efficient secret-sharing scheme. Previous such constructions (Beimel and Ishai; CCC 2001) were algebraic and number-theoretic in nature. Limitations: We show that universally efficient secret-sharing schemes, where the complexity of computing the shares is a polynomial independent of the complexity of deciding the access structure, cannot exist for all (monotone languages in) P, unless there is a polynomial q such that P ⊆ DSPACE(q(n)). by Prashant Vasudevan. S.M.
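
    As background for the sharing/reconstruction distinction discussed above, here is a minimal sketch of standard n-out-of-n additive (XOR) secret sharing (a textbook scheme, not a construction from the thesis): the sharing algorithm splits a secret into n random shares that XOR to the secret, reconstruction XORs all shares back together, and any proper subset of shares reveals nothing.

        # Minimal textbook n-out-of-n additive (XOR) secret sharing, shown only to
        # illustrate sharing vs. reconstruction; not a construction from the thesis.
        import secrets
        from functools import reduce

        def share(secret_bits, secret, n):
            shares = [secrets.randbits(secret_bits) for _ in range(n - 1)]
            last = reduce(lambda a, b: a ^ b, shares, secret)  # forces XOR of all shares = secret
            return shares + [last]

        def reconstruct(shares):
            return reduce(lambda a, b: a ^ b, shares, 0)

        s = 0xDEADBEEF
        shares = share(32, s, n=5)
        assert reconstruct(shares) == s   # all 5 shares recover the secret
        # Any 4 of the 5 shares are jointly uniform and reveal nothing about s.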

    Fine-grained cryptography

    Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018. Cataloged from PDF version of thesis. Includes bibliographical references (pages 167-180). Fine-grained cryptography is the study of cryptographic objects that are required to be secure only against adversaries that are moderately more powerful than the honest parties. This weakening in security requirements opens up possibilities for meaningful cryptographic constructions in various settings using hardness assumptions that are considerably weaker than those used in standard cryptography. In this thesis, we study these possibilities in two different settings. First, we present functions that are hard to compute on average for algorithms running in some fixed polynomial time, assuming widely conjectured worst-case hardness of certain problems from the study of fine-grained complexity. We also construct a proof-of-work protocol based on this hardness and certain structural properties of our functions. Second, we construct several unconditionally secure cryptographic primitives that are computable by and secure against constant-depth circuits. Under a reasonable complexity-theoretic assumption, we do the same for log-depth circuits. by Prashant Nalini Vasudevan. Ph.D.

    Improvements to Secure Computation with Penalties

    Motivated by the impossibility of achieving fairness in secure computation [Cleve, STOC 1986], recent works study a model of fairness in which an adversarial party that aborts on receiving output is forced to pay a mutually predefined monetary penalty to every other party that did not receive the output. These works show how to design protocols for secure computation with penalties that tolerate an arbitrary number of corruptions. In this work, we improve the efficiency of protocols for secure computation with penalties in a hybrid model where parties have access to the "claim-or-refund" transaction functionality. Our first improvement is for the ladder protocol of Bentov and Kumaresan (Crypto 2014), where we improve the dependence of the script complexity of the protocol (which corresponds to miner verification load and also space on the blockchain) on the number of parties from quadratic to linear (and in particular is completely independent of the underlying function). Our second improvement is for the see-saw protocol of Kumaresan et al. (CCS 2015), where we reduce the total number of claim-or-refund transactions and also the script complexity from quadratic to linear in the number of parties. We also present a 'dual-mode' protocol that offers different guarantees depending on the number of corrupt parties: (1) when s < n/2 parties are corrupt, this protocol guarantees fairness (i.e., either all parties get the output or none do), and (2) when t > n/2 parties are corrupt, this protocol guarantees fairness with penalties (i.e., if the adversary gets the output, then either the honest parties get the output as well or they get compensation via penalizing the adversary). The above protocol works as long as t + s < n, matching the bound obtained for secure computation protocols in the standard model (i.e., replacing "fairness with penalties" with "security-with-abort" (full security except fairness)) by Ishai et al. (SICOMP 2011).
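
    The "claim-or-refund" functionality these protocols build on has simple semantics; the sketch below is an illustrative model of it (the class and method names are hypothetical, a simplification rather than the paper's formalism): a sender deposits coins that a designated receiver can claim before a deadline by exhibiting a witness satisfying a predicate, after which the sender can reclaim the deposit.

        # Illustrative model of the "claim-or-refund" transaction functionality
        # (hypothetical names; a simplification, not the paper's formal definition).
        class ClaimOrRefund:
            def __init__(self, sender, receiver, coins, predicate, deadline):
                self.sender, self.receiver, self.coins = sender, receiver, coins
                self.predicate = predicate    # circuit/script the witness must satisfy
                self.deadline = deadline      # round by which the receiver must claim
                self.settled = False

            def claim(self, witness, current_round):
                # Receiver publishes a witness before the deadline and collects the coins.
                if not self.settled and current_round <= self.deadline and self.predicate(witness):
                    self.settled = True
                    return (self.receiver, self.coins, witness)   # witness becomes public
                return None

            def refund(self, current_round):
                # After the deadline, an unclaimed deposit returns to the sender.
                if not self.settled and current_round > self.deadline:
                    self.settled = True
                    return (self.sender, self.coins)
                return None

        # Example: Bob can claim 1 coin from Alice by round 10 by revealing a witness.
        tx = ClaimOrRefund("Alice", "Bob", 1,
                           predicate=lambda w: w == "opening-of-commitment", deadline=10)
        print(tx.claim("opening-of-commitment", current_round=7))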

    Deletion inference, reconstruction, and compliance in machine (un)learning

    DOI: 10.56553/popets-2022-0079. Proceedings on Privacy Enhancing Technologies, 2022(3), pp. 415-43